Results 1 - 20 of 86
1.
Med Image Anal ; 95: 103159, 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38663318

ABSTRACT

We have developed a United framework that integrates three self-supervised learning (SSL) ingredients (discriminative, restorative, and adversarial learning), enabling collaborative learning among the three ingredients and yielding three transferable components: a discriminative encoder, a restorative decoder, and an adversary encoder. To leverage this collaboration, we redesigned nine prominent self-supervised methods, including Rotation, Jigsaw, Rubik's Cube, Deep Clustering, TransVW, MoCo, BYOL, PCRL, and Swin UNETR, and augmented each with its missing components in a United framework for 3D medical imaging. However, such a United framework increases model complexity, making 3D pretraining difficult. To overcome this difficulty, we propose stepwise incremental pretraining, a strategy that unifies the pretraining: a discriminative encoder is first trained via discriminative learning; the pretrained discriminative encoder is then attached to a restorative decoder, forming a skip-connected encoder-decoder, for further joint discriminative and restorative learning; and finally, the pretrained encoder-decoder is associated with an adversary encoder for full discriminative, restorative, and adversarial learning. Our extensive experiments demonstrate that stepwise incremental pretraining stabilizes the pretraining of United models, resulting in significant performance gains and annotation cost reduction via transfer learning in six target tasks, ranging from classification to segmentation, across diseases, organs, datasets, and modalities. This performance improvement is attributed to the synergy of the three SSL ingredients in our United framework, unleashed through stepwise incremental pretraining. Our codes and pretrained models are available at GitHub.com/JLiangLab/StepwisePretraining.
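
The stepwise recipe above can be pictured with a short PyTorch sketch. It assumes generic `encoder`, `head` (discriminative pretext head), and `decoder` modules and an unlabeled-volume `loader` yielding (volume, pretext_label) pairs; all names and training details are illustrative assumptions, not the released StepwisePretraining code.

import torch
import torch.nn as nn

def stepwise_pretrain(encoder, head, decoder, loader, device="cpu"):
    ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()

    # Step 1: discriminative learning only (e.g., rotation/jigsaw prediction).
    opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-4)
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        loss = ce(head(encoder(x)), y)
        opt.zero_grad(); loss.backward(); opt.step()

    # Step 2: attach the pretrained encoder to a restorative decoder and
    # continue with joint discriminative + restorative learning.
    params = list(encoder.parameters()) + list(head.parameters()) + list(decoder.parameters())
    opt = torch.optim.Adam(params, lr=1e-4)
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        z = encoder(x)
        loss = ce(head(z), y) + mse(decoder(z), x)
        opt.zero_grad(); loss.backward(); opt.step()

    # Step 3 (not shown): associate the encoder-decoder with an adversary
    # encoder and add an adversarial term for full three-ingredient training.
    return encoder, decoder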

2.
Article in English | MEDLINE | ID: mdl-38631369

ABSTRACT

Interstitial lung disorders are a group of respiratory diseases characterized by interstitial compartment infiltration of varying degrees and fibrosis, with or without small airway involvement. Although some are idiopathic (e.g., idiopathic pulmonary fibrosis, idiopathic interstitial pneumonias, and sarcoidosis), the great majority have an underlying etiology, such as systemic autoimmune rheumatic disease (SARD, also called connective tissue disease or CTD), inhalational exposure to organic matter, medications, and, rarely, genetic disorders. This review focuses on diagnostic approaches in interstitial lung diseases associated with SARDs. To make an accurate diagnosis, a multidisciplinary, personalized approach is required, with input from various specialties, including pulmonology, rheumatology, radiology, and pathology, to reach a consensus. In a minority of patients, a definitive diagnosis cannot be established. Their clinical presentations and prognosis can be variable even within subsets of SARDs.

3.
Med Image Anal ; 94: 103086, 2024 May.
Article in English | MEDLINE | ID: mdl-38537414

ABSTRACT

Discriminative, restorative, and adversarial learning have proven beneficial for self-supervised learning schemes in computer vision and medical imaging. Existing efforts, however, fail to capitalize on the potentially synergistic effects these methods may offer in a ternary setup, which, we envision, can significantly benefit deep semantic representation learning. Towards this end, we developed DiRA, the first framework that unites discriminative, restorative, and adversarial learning in a unified manner to collaboratively glean complementary visual information from unlabeled medical images for fine-grained semantic representation learning. Our extensive experiments demonstrate that DiRA: (1) encourages collaborative learning among three learning ingredients, resulting in more generalizable representation across organs, diseases, and modalities; (2) outperforms fully supervised ImageNet models and increases robustness in small data regimes, reducing annotation cost across multiple medical imaging applications; (3) learns fine-grained semantic representation, facilitating accurate lesion localization with only image-level annotation; (4) improves reusability of low/mid-level features; and (5) enhances restorative self-supervised approaches, revealing that DiRA is a general framework for united representation learning. Code and pretrained models are available at https://github.com/JLiangLab/DiRA.
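
As a rough illustration of how the three ingredients can be combined in one objective, the PyTorch sketch below performs a single training step with a weighted discriminative + restorative + adversarial loss; the modules, the negative-cosine criterion, and the equal weights are assumptions for illustration, not the released DiRA implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

def dira_style_step(x1, x2, encoder, projector, decoder, discriminator,
                    opt, opt_d, weights=(1.0, 1.0, 1.0)):
    """One step on two augmented crops (x1, x2) of the same unlabeled image."""
    bce = nn.BCEWithLogitsLoss()
    real = torch.ones(x1.size(0), 1, device=x1.device)
    fake = torch.zeros(x1.size(0), 1, device=x1.device)

    # Discriminative term: pull the two crop embeddings together (a simple
    # negative-cosine criterion stands in for MoCo/SimSiam-style learning).
    z1, z2 = projector(encoder(x1)), projector(encoder(x2))
    l_dis = -F.cosine_similarity(z1, z2.detach(), dim=-1).mean()

    # Restorative term: reconstruct the crop from its representation.
    x1_hat = decoder(encoder(x1))
    l_res = F.mse_loss(x1_hat, x1)

    # Adversarial term: try to fool the discriminator with the restored crop.
    l_adv = bce(discriminator(x1_hat), real)

    loss = weights[0] * l_dis + weights[1] * l_res + weights[2] * l_adv
    opt.zero_grad(); loss.backward(); opt.step()

    # Discriminator update on real crops vs. detached restorations.
    d_loss = bce(discriminator(x1), real) + bce(discriminator(x1_hat.detach()), fake)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    return loss.item(), d_loss.item()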


Subjects
Hereditary Autoinflammatory Diseases; Humans; Semantics; Supervised Machine Learning; Interleukin 1 Receptor Antagonist Protein
4.
Med Image Anal ; 91: 102988, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37924750

ABSTRACT

Pulmonary Embolism (PE) represents a thrombus ("blood clot"), usually originating from a lower extremity vein, that travels to the blood vessels in the lung, causing vascular obstruction and, in some patients, death. This disorder is commonly diagnosed using Computed Tomography Pulmonary Angiography (CTPA). Deep learning holds great promise for the Computer-aided Diagnosis (CAD) of PE. However, numerous deep learning methods, such as Convolutional Neural Networks (CNN) and Transformer-based models, exist for a given task, causing great confusion regarding the development of CAD systems for PE. To address this confusion, we present a comprehensive analysis of competing deep learning methods applicable to PE diagnosis based on four datasets. First, we use the RSNA PE dataset, which includes (weak) slice-level and exam-level labels, for PE classification and diagnosis, respectively. At the slice level, we compare CNNs with the Vision Transformer (ViT) and the Swin Transformer. We also investigate the impact of self-supervised versus (fully) supervised ImageNet pre-training, and of transfer learning versus training models from scratch. Additionally, at the exam level, we compare sequence model learning with our proposed transformer-based architecture, Embedding-based ViT (E-ViT). For the second and third datasets, we utilize the CAD-PE Challenge Dataset and Ferdowsi University of Mashhad's PE Dataset, where we convert (strong) clot-level masks into slice-level annotations to evaluate the optimal CNN model for slice-level PE classification. Finally, we use our in-house PE-CAD dataset, which contains (strong) clot-level masks. Here, we investigate the impact of our vessel-oriented image representations and self-supervised pre-training on PE false positive reduction at the clot level across image dimensions (2D, 2.5D, and 3D). Our experiments show that (1) transfer learning boosts performance despite differences between photographic images and CTPA scans; (2) self-supervised pre-training can surpass (fully) supervised pre-training; (3) transformer-based models demonstrate comparable performance but slower convergence compared with CNNs for slice-level PE classification; (4) a model trained on the RSNA PE dataset demonstrates promising performance when tested on unseen datasets for slice-level PE classification; (5) our E-ViT framework excels in handling variable numbers of slices and outperforms sequence model learning for exam-level diagnosis; and (6) vessel-oriented image representation and self-supervised pre-training both enhance performance for PE false positive reduction across image dimensions. Our optimal approach surpasses state-of-the-art results on the RSNA PE dataset, enhancing AUC by 0.62% (slice-level) and 2.22% (exam-level). On our in-house PE-CAD dataset, 3D vessel-oriented images improve performance from 80.07% to 91.35%, a remarkable gain of more than 11 percentage points. Codes are available at GitHub.com/JLiangLab/CAD_PE.
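
To illustrate the exam-level problem of aggregating a variable number of slices, the sketch below pools per-slice embeddings with a small transformer encoder and a [CLS] token under a padding mask. It conveys the general idea only; it is not the authors' E-ViT architecture, and all dimensions are placeholders.

import torch
import torch.nn as nn

class ExamLevelClassifier(nn.Module):
    """Aggregates a variable number of per-slice embeddings into one exam logit."""
    def __init__(self, dim=512, heads=8, layers=2):
        super().__init__()
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        block = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(block, num_layers=layers)
        self.head = nn.Linear(dim, 1)

    def forward(self, slice_emb, pad_mask):
        # slice_emb: (B, S, D) embeddings from a 2D slice-level backbone,
        # zero-padded to the longest exam; pad_mask: (B, S), True at padding.
        b = slice_emb.size(0)
        tokens = torch.cat([self.cls.expand(b, -1, -1), slice_emb], dim=1)
        keep_cls = torch.zeros(b, 1, dtype=torch.bool, device=slice_emb.device)
        out = self.transformer(tokens,
                               src_key_padding_mask=torch.cat([keep_cls, pad_mask], dim=1))
        return self.head(out[:, 0])  # exam-level PE logit read from the [CLS] token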


Subjects
Diagnosis, Computer-Assisted; Pulmonary Embolism; Humans; Diagnosis, Computer-Assisted/methods; Neural Networks, Computer; Imaging, Three-Dimensional; Pulmonary Embolism/diagnostic imaging; Computers
5.
Am J Surg Pathol ; 47(3): 281-295, 2023 03 01.
Article in English | MEDLINE | ID: mdl-36597787

ABSTRACT

The use of lymphoid interstitial pneumonia (LIP) as a diagnostic term has changed considerably since its introduction. Utilizing a multi-institutional collection of 201 cases from the last 20 years that demonstrate features associated with the LIP rubric, we compared cases meeting strict histologic criteria of LIP per American Thoracic Society (ATS)/European Respiratory Society (ERS) consensus ("pathologic LIP"; n=62) with cystic cases fulfilling radiologic ATS/ERS criteria ("radiologic LIP"; n=33) and with other diffuse benign lymphoid proliferations. "Pathologic LIP" was associated with immune dysregulation, including autoimmune disorders and immune deficiency, whereas "radiologic LIP" was only seen with autoimmune disorders. No case of idiopathic LIP was found. On histology, "pathologic LIP" represented a subgroup of 70% (62/88) of cases with the distinctive pattern of diffuse expansile lymphoid infiltrates. In contrast, "radiologic LIP" demonstrated a broad spectrum of inflammatory patterns, airway-centered inflammation being most common (52%; 17/33). Only 5 cases with radiologic cysts also met consensus ATS/ERS criteria for "pathologic LIP." Overall, broad overlap was observed with the remaining study cases that failed to meet consensus criteria for "radiologic LIP" and/or "pathologic LIP." These data raise concerns about the practical use of the term LIP as currently defined. What radiologists and pathologists encounter as LIP differs remarkably, but neither "radiologic LIP" nor "pathologic LIP" presents with sufficiently distinct findings to delineate such cases from other patterns of diffuse benign lymphoid proliferations. As a result of this study, we believe LIP should be abandoned as a pathologic and radiologic diagnosis.


Subjects
Idiopathic Interstitial Pneumonias; Lung Diseases, Interstitial; Humans; Lung Diseases, Interstitial/pathology; Lung/pathology; Idiopathic Interstitial Pneumonias/diagnosis; Idiopathic Interstitial Pneumonias/pathology; Radiography
6.
Proc Mach Learn Res ; 172: 535-551, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36579134

ABSTRACT

Recently, self-supervised instance discrimination methods have achieved significant success in learning visual representations from unlabeled photographic images. However, given the marked differences between photographic and medical images, the efficacy of instance-based objectives, which focus on learning the most discriminative global features in an image (e.g., the wheels of a bicycle), remains unknown in medical imaging. Our preliminary analysis showed that the high global similarity of medical images in terms of anatomy hampers instance discrimination methods from capturing a set of distinct features, negatively impacting their performance on medical downstream tasks. To alleviate this limitation, we have developed a simple yet effective self-supervised framework, called Context-Aware instance Discrimination (CAiD). CAiD aims to improve instance discrimination learning by providing finer and more discriminative information encoded from a diverse local context of unlabeled medical images. We conduct a systematic analysis to investigate the utility of the learned features from a three-pronged perspective: (i) generalizability and transferability, (ii) separability in the embedding space, and (iii) reusability. Our extensive experiments demonstrate that CAiD (1) enriches representations learned from existing instance discrimination methods; (2) delivers more discriminative features by adequately capturing finer contextual information from individual medical images; and (3) improves reusability of low/mid-level features compared to standard instance discrimination methods. As open science, all codes and pre-trained models are available on our GitHub page: https://github.com/JLiangLab/CAiD.
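
A minimal sketch of the idea of coupling instance discrimination with local context restoration is given below; the SimSiam-style negative-cosine criterion, the module names, and the loss weight `lam` are illustrative assumptions rather than the released CAiD code.

import torch.nn.functional as F

def caid_style_loss(crop1, crop2, encoder, projector, predictor, decoder, lam=1.0):
    # Instance discrimination on two augmented crops of the same image.
    z1, z2 = projector(encoder(crop1)), projector(encoder(crop2))
    p1, p2 = predictor(z1), predictor(z2)
    l_inst = -(F.cosine_similarity(p1, z2.detach(), dim=-1).mean()
               + F.cosine_similarity(p2, z1.detach(), dim=-1).mean()) / 2

    # Context restoration: reconstruct the crop from its embedding, forcing
    # the features to keep fine-grained local (anatomical) detail.
    l_ctx = F.mse_loss(decoder(encoder(crop1)), crop1)
    return l_inst + lam * l_ctx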

7.
Domain Adapt Represent Transf (2022) ; 13542: 77-87, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36507898

ABSTRACT

Vision transformer-based self-supervised learning (SSL) approaches have recently shown substantial success in learning visual representations from unannotated photographic images. However, their acceptance in medical imaging is still lukewarm, due to the significant discrepancy between medical and photographic images. Consequently, we propose POPAR (patch order prediction and appearance recovery), a novel vision transformer-based self-supervised learning framework for chest X-ray images. POPAR leverages the benefits of vision transformers and unique properties of medical imaging, aiming to simultaneously learn patch-wise high-level contextual features by correcting shuffled patch orders and fine-grained features by recovering patch appearance. We transfer POPAR pretrained models to diverse downstream tasks. The experimental results suggest that (1) POPAR outperforms state-of-the-art (SoTA) self-supervised models with a vision transformer backbone; (2) POPAR achieves significantly better performance than all three SoTA contrastive learning methods; and (3) POPAR also outperforms fully supervised pretrained models across architectures. In addition, our ablation study suggests that to achieve better performance on medical imaging tasks, both fine-grained and global contextual features are preferred. All code and models are available at GitHub.com/JLiangLab/POPAR.
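
The two POPAR pretext tasks can be sketched as follows: patches are shuffled and mildly corrupted, and the model is trained to predict each patch's original position and to recover its clean appearance. The patch size, the corruption, and the head shapes are assumptions for illustration, not the official implementation.

import torch
import torch.nn.functional as F

def popar_style_step(images, backbone, order_head, recon_head, opt, patch=32):
    """images: (B, 1, H, W); backbone maps (B, N, 1, patch, patch) -> (B, N, D)."""
    b, c, h, w = images.shape
    # Split each image into N non-overlapping patches.
    patches = images.unfold(2, patch, patch).unfold(3, patch, patch)
    patches = patches.reshape(b, c, -1, patch, patch).transpose(1, 2)  # (B, N, 1, p, p)
    n = patches.size(1)

    # Shuffle patch order and corrupt appearance; both must be undone.
    perm = torch.stack([torch.randperm(n) for _ in range(b)]).to(images.device)  # (B, N)
    shuffled = torch.stack([patches[i, perm[i]] for i in range(b)])
    corrupted = shuffled + 0.1 * torch.randn_like(shuffled)

    tokens = backbone(corrupted)           # (B, N, D)
    order_logits = order_head(tokens)      # (B, N, N): original position per patch
    recon = recon_head(tokens)             # (B, N, patch*patch): clean appearance

    l_order = F.cross_entropy(order_logits.reshape(-1, n), perm.reshape(-1))
    l_recon = F.mse_loss(recon, shuffled.reshape(b, n, -1))
    loss = l_order + l_recon
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()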

8.
Domain Adapt Represent Transf (2022) ; 13542: 66-76, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36507899

ABSTRACT

Uniting three self-supervised learning (SSL) ingredients (discriminative, restorative, and adversarial learning) enables collaborative representation learning and yields three transferable components: a discriminative encoder, a restorative decoder, and an adversary encoder. To leverage this advantage, we have redesigned five prominent SSL methods, including Rotation, Jigsaw, Rubik's Cube, Deep Clustering, and TransVW, and formulated each in a United framework for 3D medical imaging. However, such a United framework increases model complexity and pretraining difficulty. To overcome this difficulty, we develop a stepwise incremental pretraining strategy: a discriminative encoder is first trained via discriminative learning; the pretrained discriminative encoder is then attached to a restorative decoder, forming a skip-connected encoder-decoder, for further joint discriminative and restorative learning; and finally, the pretrained encoder-decoder is associated with an adversarial encoder for final full discriminative, restorative, and adversarial learning. Our extensive experiments demonstrate that the stepwise incremental pretraining stabilizes the training of United models, resulting in significant performance gains and annotation cost reduction via transfer learning for five target tasks, encompassing both classification and segmentation, across diseases, organs, datasets, and modalities. This performance is attributed to the synergy of the three SSL ingredients in our United framework, unleashed via stepwise incremental pretraining. All codes and pretrained models are available at GitHub.com/JLiangLab/StepwisePretraining.

9.
Domain Adapt Represent Transf (2022) ; 13542: 12-22, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36383492

ABSTRACT

Visual transformers have recently gained popularity in the computer vision community as they began to outrank convolutional neural networks (CNNs) in one representative visual benchmark after another. However, the competition between visual transformers and CNNs in medical imaging is rarely studied, leaving many important questions unanswered. As the first step, we benchmark how well existing transformer variants that use various (supervised and self-supervised) pre-training methods perform against CNNs on a variety of medical classification tasks. Furthermore, given the data-hungry nature of transformers and the annotation-deficiency challenge of medical imaging, we present a practical approach for bridging the domain gap between photographic and medical images by utilizing unlabeled large-scale in-domain data. Our extensive empirical evaluations reveal the following insights in medical imaging: (1) good initialization is more crucial for transformer-based models than for CNNs, (2) self-supervised learning based on masked image modeling captures more generalizable representations than supervised models, and (3) assembling a larger-scale domain-specific dataset can better bridge the domain gap between photographic and medical images via self-supervised continuous pre-training. We hope this benchmark study can direct future research on applying transformers to medical imaging analysis. All codes and pre-trained models are available on our GitHub page https://github.com/JLiangLab/BenchmarkTransformers.
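
The domain-adaptive recipe in point (3) can be sketched as follows: start from ImageNet weights and continue pretraining on unlabeled in-domain images before fine-tuning on the target task. The masked-reconstruction objective below merely stands in for masked image modeling, and the ResNet-50 backbone and hyperparameters are assumptions, not the benchmarked setup.

import torch
import torch.nn as nn
import torchvision

def continual_pretrain(unlabeled_loader, epochs=1, mask_ratio=0.6):
    model = torchvision.models.resnet50(weights="IMAGENET1K_V2")   # step 1: ImageNet init
    encoder = nn.Sequential(*list(model.children())[:-1])          # drop the fc head
    decoder = nn.Sequential(nn.Flatten(), nn.Linear(2048, 224 * 224))
    opt = torch.optim.AdamW(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4)
    for _ in range(epochs):                                         # step 2: in-domain SSL
        for x in unlabeled_loader:                                  # x: (B, 3, 224, 224)
            mask = (torch.rand(x.shape[0], 1, 224, 224) < mask_ratio).float()
            recon = decoder(encoder(x * (1 - mask)))                # predict from masked input
            target = x.mean(dim=1).reshape(x.shape[0], -1)          # grayscale target
            loss = nn.functional.mse_loss(recon, target)
            opt.zero_grad(); loss.backward(); opt.step()
    return encoder   # step 3 (not shown): fine-tune the encoder on labeled data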

10.
Article in English | MEDLINE | ID: mdl-36313959

ABSTRACT

Discriminative learning, restorative learning, and adversarial learning have proven beneficial for self-supervised learning schemes in computer vision and medical imaging. Existing efforts, however, omit their synergistic effects on each other in a ternary setup, which, we envision, can significantly benefit deep semantic representation learning. To realize this vision, we have developed DiRA, the first framework that unites discriminative, restorative, and adversarial learning in a unified manner to collaboratively glean complementary visual information from unlabeled medical images for fine-grained semantic representation learning. Our extensive experiments demonstrate that DiRA (1) encourages collaborative learning among three learning ingredients, resulting in more generalizable representation across organs, diseases, and modalities; (2) outperforms fully supervised ImageNet models and increases robustness in small data regimes, reducing annotation cost across multiple medical imaging applications; (3) learns fine-grained semantic representation, facilitating accurate lesion localization with only image-level annotation; and (4) enhances state-of-the-art restorative approaches, revealing that DiRA is a general mechanism for united representation learning. All code and pretrained models are available at https://github.com/JLiangLab/DiRA.

11.
JACC Case Rep ; 4(8): 476-480, 2022 Apr 20.
Article in English | MEDLINE | ID: mdl-35493796

ABSTRACT

Although infrequent, damage to cardiovascular structures can occur during or following a minimally invasive repair of pectus excavatum. We present a case of right ventricular outflow tract compression caused by a displaced intrathoracic bar. Removal of the bar resulted in an improvement in symptoms and hemodynamics. (Level of Difficulty: Advanced.)

12.
J Am Heart Assoc ; 11(7): e022149, 2022 04 05.
Article in English | MEDLINE | ID: mdl-35377159

ABSTRACT

Background Pectus excavatum is the most common chest wall deformity. There is still controversy about the cardiopulmonary limitations of this disease and the benefits of surgical repair. This study evaluates the impact of pectus excavatum on the cardiopulmonary function of adult patients before and after a modified minimally invasive repair. Methods and Results In this retrospective cohort study, an electronic database was used to identify consecutive adult (aged ≥18 years) patients who underwent cardiopulmonary exercise testing before and after primary pectus excavatum repair at Mayo Clinic Arizona from 2011 to 2020. In total, 392 patients underwent preoperative cardiopulmonary exercise testing; abnormal oxygen consumption results were present in 68% of patients. Among them, 130 patients (68% men; mean age, 32.4±10.0 years) had post-repair evaluations. Post-repair tests were performed immediately before bar removal, with a mean time between repair and post-repair testing of 3.4±0.7 years (range, 2.5-7.0 years). A significant improvement in cardiopulmonary outcomes (P<0.001 for all comparisons) was seen in the post-repair evaluations, including an increase in maximum and predicted rate of oxygen consumption, oxygen pulse, oxygen consumption at anaerobic threshold, and maximal ventilation. In a subanalysis of 39 patients who also underwent intraoperative transesophageal echocardiography at repair and at bar removal, a significant increase in right ventricular stroke volume was found (P<0.001). Conclusions Consistent improvements in cardiopulmonary function were seen in adult patients with pectus excavatum undergoing surgery. These results strongly support the existence of adverse cardiopulmonary consequences of this disease as well as the benefits of surgical repair.


Subjects
Funnel Chest; Adolescent; Adult; Female; Funnel Chest/surgery; Humans; Lung; Male; Postoperative Period; Retrospective Studies; Treatment Outcome; Young Adult
13.
Radiographics ; 42(1): 38-55, 2022.
Article in English | MEDLINE | ID: mdl-34826256

ABSTRACT

Medication-induced pulmonary injury (MIPI) is a complex medical condition that has become increasingly common yet remains stubbornly difficult to diagnose. Diagnosis can be aided by combining knowledge of the most common imaging patterns caused by MIPI with awareness of which medications a patient may be exposed to in specific clinical settings. The authors describe six imaging patterns commonly associated with MIPI: sarcoidosis-like, diffuse ground-glass opacities, organizing pneumonia, centrilobular ground-glass nodules, linear-septal, and fibrotic. Subsequently, the occurrence of these patterns is discussed in the context of five different clinical scenarios and the medications and medication classes typically used in those scenarios. These scenarios and medication classes include the rheumatology or gastrointestinal clinic (disease-modifying antirheumatic agents), cardiology clinic (antiarrhythmics), hematology clinic (cytotoxic agents, tyrosine kinase inhibitors, retinoids), oncology clinic (immune modulators, tyrosine kinase inhibitors, monoclonal antibodies), and inpatient service (antibiotics, blood products). Additionally, the article draws comparisons between the appearance of MIPI and the alternative causes of lung disease typically seen in those clinical scenarios (eg, connective tissue disease-related interstitial lung disease in the rheumatology clinic and hydrostatic pulmonary edema in the cardiology clinic). Familiarity with the most common imaging patterns associated with frequently administered medications can help insert MIPI into the differential diagnosis of acquired lung disease in these scenarios. However, confident diagnosis is often thwarted by absence of specific diagnostic tests for MIPI. Instead, a working diagnosis typically relies on multidisciplinary consensus. ©RSNA, 2021.


Subjects
Connective Tissue Diseases; Lung Diseases, Interstitial; Lung Injury; Humans; Lung; Lung Injury/chemically induced; Lung Injury/diagnostic imaging; Tomography, X-Ray Computed/methods
14.
Med Image Anal ; 71: 101997, 2021 07.
Article in English | MEDLINE | ID: mdl-33853034

ABSTRACT

The splendid success of convolutional neural networks (CNNs) in computer vision is largely attributable to the availability of massive annotated datasets, such as ImageNet and Places. However, in medical imaging, it is challenging to create such large annotated datasets, as annotating medical images is not only tedious, laborious, and time consuming, but it also demands costly, specialty-oriented skills, which are not easily accessible. To dramatically reduce annotation cost, this paper presents a novel method to naturally integrate active learning and transfer learning (fine-tuning) into a single framework, which starts directly with a pre-trained CNN to seek "worthy" samples for annotation and gradually enhances the (fine-tuned) CNN via continual fine-tuning. We have evaluated our method using three distinct medical imaging applications, demonstrating that it can reduce annotation efforts by at least half compared with random selection.
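
A hedged sketch of the overall loop follows: starting from a pretrained CNN, repeatedly score unlabeled samples, send the most "worthy" ones for annotation, and continually fine-tune on everything annotated so far. The entropy-based selection below is a simplification of the paper's actual worthiness criteria, and all names are illustrative.

import torch
import torch.nn.functional as F

def active_finetune(model, unlabeled, annotate, rounds=5, budget=100, lr=1e-4):
    """unlabeled: list of image tensors (C, H, W); annotate(x) returns an int label."""
    labeled = []
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(rounds):
        # Score every remaining unlabeled sample by predictive entropy.
        model.eval()
        with torch.no_grad():
            scores = []
            for x in unlabeled:
                p = F.softmax(model(x.unsqueeze(0)), dim=1)
                scores.append(-(p * p.clamp_min(1e-8).log()).sum().item())
        worthy = sorted(range(len(unlabeled)), key=lambda i: -scores[i])[:budget]
        worthy_set = set(worthy)
        labeled += [(unlabeled[i], annotate(unlabeled[i])) for i in worthy]
        unlabeled = [x for i, x in enumerate(unlabeled) if i not in worthy_set]
        # Continual fine-tuning on all samples annotated so far.
        model.train()
        for x, y in labeled:
            loss = F.cross_entropy(model(x.unsqueeze(0)), torch.tensor([y]))
            opt.zero_grad(); loss.backward(); opt.step()
    return model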


Subjects
Diagnostic Imaging; Neural Networks, Computer; Humans; Longitudinal Studies
15.
Med Mycol ; 59(8): 834-841, 2021 Jul 14.
Article in English | MEDLINE | ID: mdl-33724424

ABSTRACT

Approximately 5 to 15% of patients with pulmonary coccidioidomycosis subsequently develop pulmonary cavities. These cavities may resolve spontaneously over a number of years; however, some cavities never close, and a small proportion causes complications such as hemorrhage, pneumothorax, or empyema. The impact of azole antifungal treatment on coccidioidal cavities has not been studied. Because azoles are a common treatment for symptomatic pulmonary coccidioidomycosis, we aimed to assess the impact of azole therapy on cavity closure. From January 1, 2004, through December 31, 2014, we retrospectively identified 313 patients with cavitary coccidioidomycosis and excluded 42 who had the cavity removed surgically, leaving 271 data sets available for study. Of the 271 patients, 221 (81.5%) received azole therapy during 5-year follow-up; 50 patients did not receive antifungal treatment. Among the 271 patients, cavities closed in 38 (14.0%). Statistical modeling showed that cavities were more likely to close in patients in the treated group than in the nontreated group (hazard ratio, 2.14 [95% CI: 1.45-5.66]). Cavities were less likely to close in active smokers than in nonsmokers (11/41 [26.8%] vs 97/182 [53.3%]; P = 0.002) or in persons with diabetes than in those without (27/74 [36.5%] vs 81/149 [54.4%]; P = 0.01). We did not find an association between cavity size and closure. Our findings provide a rationale for further study of treatment protocols in this subset of patients with coccidioidomycosis. LAY SUMMARY: Coccidioidomycosis, known as valley fever, is a fungal infection that infrequently causes cavities to form in the lungs, which potentially results in long-term lung symptoms. We learned that cavities closed more often in persons who received antifungal drugs, but most cavities never closed completely.


Subjects
Antifungal Agents/therapeutic use; Azoles/therapeutic use; Coccidioidomycosis/drug therapy; Adolescent; Adult; Aged; Aged, 80 and over; Coccidioidomycosis/complications; Coccidioidomycosis/epidemiology; Comorbidity; Diabetes Complications/drug therapy; Diabetes Complications/epidemiology; Female; Humans; Immunosuppression Therapy; Male; Middle Aged; Neoplasms/complications; Pulmonary Disease, Chronic Obstructive/complications; Pulmonary Disease, Chronic Obstructive/epidemiology; Retrospective Studies; Smokers; Transplant Recipients; Young Adult
16.
IEEE Trans Med Imaging ; 40(10): 2857-2868, 2021 10.
Article in English | MEDLINE | ID: mdl-33617450

ABSTRACT

This paper introduces a new concept called "transferable visual words" (TransVW), aiming to achieve annotation efficiency for deep learning in medical image analysis. Medical imaging, which focuses on particular parts of the body for defined clinical purposes, generates images of great similarity in anatomy across patients and yields sophisticated anatomical patterns across images, which are associated with rich semantics about human anatomy and which are natural visual words. We show that these visual words can be automatically harvested according to anatomical consistency via self-discovery, and that the self-discovered visual words can serve as strong yet free supervision signals for deep models to learn semantics-enriched generic image representation via self-supervision (self-classification and self-restoration). Our extensive experiments demonstrate the annotation efficiency of TransVW by offering higher performance and faster convergence with reduced annotation cost in several applications. Our TransVW has several important advantages, including (1) TransVW is a fully autodidactic scheme, which exploits the semantics of visual words for self-supervised learning, requiring no expert annotation; (2) visual word learning is an add-on strategy, which complements existing self-supervised methods, boosting their performance; and (3) the learned image representation is semantics-enriched, yielding models that have proven to be more robust and generalizable, saving annotation efforts for a variety of applications through transfer learning. Our code, pre-trained models, and curated visual words are available at https://github.com/JLiangLab/TransVW.
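
The two self-supervision signals described above (self-classification and self-restoration of visual words) can be sketched in a few lines; the pseudo-labels, the simple noise corruption, and the module names below are illustrative stand-ins for the actual self-discovery and perturbation pipeline.

import torch
import torch.nn.functional as F

def transvw_style_step(patches, word_ids, encoder, cls_head, decoder, opt):
    """patches: (B, C, D, H, W) visual-word patches;
    word_ids: (B,) pseudo-labels assigned by self-discovery (anatomical clusters)."""
    perturbed = patches + 0.1 * torch.randn_like(patches)              # simple corruption
    features = encoder(perturbed)
    l_cls = F.cross_entropy(cls_head(features), word_ids)              # self-classification
    l_res = F.mse_loss(decoder(features), patches)                     # self-restoration
    loss = l_cls + l_res
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()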


Subjects
Diagnostic Imaging; Semantics; Humans; Radiography; Supervised Machine Learning
17.
Mach Learn Med Imaging ; 12966: 692-702, 2021 Sep.
Article in English | MEDLINE | ID: mdl-35695860

ABSTRACT

Pulmonary embolism (PE) represents a thrombus ("blood clot"), usually originating from a lower extremity vein, that travels to the blood vessels in the lung, causing vascular obstruction and, in some patients, death. This disorder is commonly diagnosed using CT pulmonary angiography (CTPA). Deep learning holds great promise for the computer-aided diagnosis (CAD) of PE based on CTPA. However, numerous competing methods for a given task exist in the deep learning literature, causing great confusion regarding the development of a CAD system for PE. To address this confusion, we present a comprehensive analysis of competing deep learning methods applicable to PE diagnosis using CTPA at both the image and exam levels. At the image level, we compare convolutional neural networks (CNNs) with vision transformers, and contrast self-supervised learning (SSL) with supervised learning, followed by an evaluation of transfer learning compared with training from scratch. At the exam level, we focus on comparing conventional classification (CC) with multiple instance learning (MIL). Our extensive experiments consistently show: (1) transfer learning boosts performance despite differences between natural images and CT scans; (2) transfer learning with SSL surpasses its supervised counterpart; (3) CNNs outperform vision transformers, which otherwise show satisfactory performance; and (4) CC is, surprisingly, superior to MIL. Compared with the state of the art, our optimal approach provides an AUC gain of 0.2% and 1.05% at the image and exam levels, respectively.
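
The exam-level comparison between conventional classification (CC) and multiple instance learning (MIL) can be made concrete with two tiny heads over per-slice embeddings: CC pools the embeddings and classifies the exam directly, while MIL scores each slice and aggregates instance logits (max-pooling here). Both are simplified illustrations, not the evaluated models.

import torch.nn as nn

class ConventionalClassifier(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        self.head = nn.Linear(dim, 1)
    def forward(self, slice_emb):                    # slice_emb: (B, S, D)
        return self.head(slice_emb.mean(dim=1))      # pool, then one exam logit

class MILClassifier(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        self.instance_head = nn.Linear(dim, 1)
    def forward(self, slice_emb):                    # slice_emb: (B, S, D)
        instance_logits = self.instance_head(slice_emb).squeeze(-1)   # (B, S)
        return instance_logits.max(dim=1).values     # exam logit = worst slice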

18.
Article in English | MEDLINE | ID: mdl-35713581

ABSTRACT

Transfer learning from supervised ImageNet models has been frequently used in medical image analysis. Yet, no large-scale evaluation has been conducted to benchmark the efficacy of newly-developed pre-training techniques for medical image analysis, leaving several important questions unanswered. As the first step in this direction, we conduct a systematic study on the transferability of models pre-trained on iNat2021, the most recent large-scale fine-grained dataset, and 14 top self-supervised ImageNet models on 7 diverse medical tasks in comparison with the supervised ImageNet model. Furthermore, we present a practical approach to bridge the domain gap between natural and medical images by continually (pre-)training supervised ImageNet models on medical images. Our comprehensive evaluation yields new insights: (1) pre-trained models on fine-grained data yield distinctive local representations that are more suitable for medical segmentation tasks, (2) self-supervised ImageNet models learn holistic features more effectively than supervised ImageNet models, and (3) continual pre-training can bridge the domain gap between natural and medical images. We hope that this large-scale open evaluation of transfer learning can direct the future research of deep learning for medical imaging. As open science, all codes and pre-trained models are available on our GitHub page https://github.com/JLiangLab/BenchmarkTransferLearning.

19.
Virchows Arch ; 478(1): 81-88, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33106908

ABSTRACT

The use of electronic nicotine delivery systems has increased dramatically in popularity over the past decade. Although lung diseases caused by vaping have been reported since the modern invention of the electronic cigarette, in the summer of 2019, patients began to present to health care centers at epidemic levels with an acute respiratory illness related to vaping, which the Centers for Disease Control and Prevention termed e-cigarette or vaping product use-associated lung injury (EVALI). This review discusses electronic nicotine delivery systems as well as the etiology, clinical presentation, imaging findings, pathologic features, treatment, and long-term consequences of EVALI. We conclude with the practical impact EVALI has had on the practice of pathology.


Subjects
Electronic Nicotine Delivery Systems; Lung Injury/etiology; Vaping/adverse effects; Humans
20.
Med Image Anal ; 67: 101840, 2021 01.
Article in English | MEDLINE | ID: mdl-33188996

ABSTRACT

Transfer learning from natural image to medical image has been established as one of the most practical paradigms in deep learning for medical image analysis. To fit this paradigm, however, 3D imaging tasks in the most prominent imaging modalities (e.g., CT and MRI) have to be reformulated and solved in 2D, losing rich 3D anatomical information, thereby inevitably compromising its performance. To overcome this limitation, we have built a set of models, called Generic Autodidactic Models, nicknamed Models Genesis, because they are created ex nihilo (with no manual labeling), self-taught (learnt by self-supervision), and generic (served as source models for generating application-specific target models). Our extensive experiments demonstrate that our Models Genesis significantly outperform learning from scratch and existing pre-trained 3D models in all five target 3D applications covering both segmentation and classification. More importantly, learning a model from scratch simply in 3D may not necessarily yield performance better than transfer learning from ImageNet in 2D, but our Models Genesis consistently top any 2D/2.5D approaches including fine-tuning the models pre-trained from ImageNet as well as fine-tuning the 2D versions of our Models Genesis, confirming the importance of 3D anatomical information and significance of Models Genesis for 3D medical imaging. This performance is attributed to our unified self-supervised learning framework, built on a simple yet powerful observation: the sophisticated and recurrent anatomy in medical images can serve as strong yet free supervision signals for deep models to learn common anatomical representation automatically via self-supervision. As open science, all codes and pre-trained Models Genesis are available at https://github.com/MrGiovanni/ModelsGenesis.
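
A minimal sketch of the self-restoration idea behind Models Genesis: corrupt a 3D sub-volume with simple appearance distortions and train an encoder-decoder to restore the original, with no manual labels. The two distortions below are a simplified subset of the transformations actually used.

import torch
import torch.nn.functional as F

def distort(volume):
    """Apply a random appearance distortion to a (B, 1, D, H, W) volume in [0, 1]."""
    out = volume.clone()
    if torch.rand(1) < 0.5:                          # non-linear intensity shift
        out = out.clamp(0, 1) ** (0.5 + torch.rand(1).item())
    if torch.rand(1) < 0.5:                          # local "in-painting" cut-out
        d, h, w = volume.shape[2:]
        out[..., d // 4: d // 2, h // 4: h // 2, w // 4: w // 2] = torch.rand(1).item()
    return out

def genesis_style_step(volumes, encoder_decoder, opt):
    restored = encoder_decoder(distort(volumes))
    loss = F.mse_loss(restored, volumes)             # learn to restore the original
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()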


Subjects
Imaging, Three-Dimensional; Magnetic Resonance Imaging; Humans